
    About Adaptive Coding on Countable Alphabets: Max-Stable Envelope Classes

    In this paper, we study the problem of lossless universal source coding for stationary memoryless sources on countably infinite alphabets. This task is generally not achievable without restricting the class of sources over which universality is desired. Building on our prior work, we propose natural families of sources characterized by a common dominating envelope. We particularly emphasize the notion of adaptivity: the ability to perform as well as an oracle that knows the envelope, without actually knowing it. This is closely related to hierarchical universal source coding, but with the important difference that families of envelope classes are not discretely indexed and not necessarily nested. Our contribution is to extend the classes of envelopes over which adaptive universal source coding is possible, namely by including max-stable (heavy-tailed) envelopes, which are excellent models in many applications, such as natural language modeling. We derive a minimax lower bound on the redundancy of any code over such envelope classes, one that applies even to an oracle that knows the envelope. We then propose a constructive code that does not use knowledge of the envelope. The code is computationally efficient and is structured around an Expanding Threshold for Auto-Censoring, and we therefore dub it the ETAC-code. We prove that the ETAC-code achieves the lower bound on the minimax redundancy within a factor logarithmic in the sequence length, and can therefore be qualified as a near-adaptive code over families of heavy-tailed envelopes. For finite and light-tailed envelopes the penalty is even smaller, and the same code closely matches previous results that explicitly made the light-tailed assumption. Our technical results are founded on methods from regular variation theory and concentration of measure.
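
    The paper itself specifies the ETAC construction; as a rough illustration of the auto-censoring idea only, the following sketch censors each symbol against a threshold that expands with time, keeping small symbols in a working alphabet and diverting large ones through an escape marker whose cost is charged at an Elias-gamma code length. The threshold schedule and the Elias-gamma side code are assumptions of this sketch, not the paper's construction.

        import math

        def elias_gamma_bits(n):
            # Bit length of the Elias gamma code for a positive integer n.
            return 2 * int(math.log2(n)) + 1

        def auto_censor(sequence, threshold=lambda i: i + 1):
            # Split a sequence of positive integers into a censored stream over
            # a growing working alphabet (0 acts as the escape marker) and the
            # total number of side bits spent on escaped symbols.
            # The linear threshold is a toy choice for illustration.
            censored, escape_bits = [], 0
            for i, x in enumerate(sequence):
                if x <= threshold(i):
                    censored.append(x)
                else:
                    censored.append(0)  # escape: symbol exceeds current threshold
                    escape_bits += elias_gamma_bits(x)
            return censored, escape_bits

        # Early occurrences of 5 are escaped; later ones fall inside the
        # expanded working alphabet.
        print(auto_censor([5] * 8))  # ([0, 0, 0, 0, 5, 5, 5, 5], 20)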

    Large alphabets: Finite, infinite, and scaling models

    How can we effectively model situations with large alphabets? On a pragmatic level, any engineered system, be it for inference, communication, or encryption, requires working with a finite number of symbols. The most straightforward model is therefore a finite alphabet. However, to emphasize the disproportionate size of the alphabet, one may want to compare its finite size with the length of the data at hand. More generally, this gives rise to scaling models that strive to capture regimes of operation where one anticipates such imbalance. Large alphabets may also be idealized as infinite. The caveat then is that such generality strips away much of the convenient machinery of finite settings. However, some of it may be salvaged by refocusing the tasks of interest, such as by moving from sequence to pattern compression, or by minimally restricting the classes of infinite models, such as via tail properties. In this paper we present an overview of models for large alphabets, some recent results, and possible directions in this area.
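
    One concrete instance of the refocusing mentioned above is pattern compression: instead of the sequence itself, one compresses the pattern that records the order in which distinct symbols first appear, which is well defined even over an infinite alphabet. A minimal sketch (the function name is an invention of this note):

        def pattern(sequence):
            # Replace each symbol by the rank of its first appearance, so the
            # result no longer depends on the identity of the alphabet.
            first_seen = {}
            return [first_seen.setdefault(x, len(first_seen) + 1) for x in sequence]

        print(pattern("abracadabra"))  # [1, 2, 3, 1, 4, 1, 5, 1, 2, 3, 1]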

    Rare Probability Estimation under Regularly Varying Heavy Tails

    This paper studies the problem of estimating the probability of symbols that have occurred very rarely, in samples drawn independently from an unknown, possibly infinite, discrete distribution. In particular, we study the multiplicative consistency of estimators, defined as the ratio of the estimate to the true quantity converging to one. We first show that the classical Good-Turing estimator is not universally consistent in this sense, despite enjoying favorable additive properties. We then use Karamata's theory of regular variation to prove that regularly varying heavy tails are sufficient for consistency. At the core of this result is a multiplicative concentration that we establish both by extending the McAllester-Ortiz additive concentration for the missing mass to all rare probabilities and by exploiting regular variation. We also derive a family of estimators which, in addition to being consistent, address some of the shortcomings of the Good-Turing estimator. For example, they perform smoothing implicitly and have the absolute discounting structure of many heuristic algorithms. This also establishes a discrete parallel to extreme value theory, and many of the techniques therein can be adapted to the framework that we set forth.

    National Science Foundation (U.S.) (Grant 6922470); United States. Office of Naval Research (Grant 6918937).
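
    For reference, the classical Good-Turing estimator that the paper analyzes assigns to the symbols seen exactly r times a total probability of (r+1) N_{r+1} / n, where N_k is the number of distinct symbols with k occurrences among n samples; r = 0 gives the missing-mass estimate. A minimal sketch of that baseline (the paper's own consistent estimators are not reproduced here):

        from collections import Counter

        def good_turing_mass(sample, r):
            # Estimate the total probability of all symbols occurring exactly
            # r times in the sample as (r + 1) * N_{r+1} / n.
            n = len(sample)
            freq_of_freqs = Counter(Counter(sample).values())
            return (r + 1) * freq_of_freqs.get(r + 1, 0) / n

        sample = list("abracadabra")
        print(good_turing_mass(sample, 0))  # missing mass: N_1 / n = 2/11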

    On inference about rare events

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 75-77).

    Despite the increasing volume of data in modern statistical applications, critical patterns and events often have little, if any, representation. This is not unreasonable, given that such variables are critical precisely because they are rare. This raises a natural question: when can we infer something meaningful in such contexts? The focal point of this thesis is the archetypal problem of estimating the probability of symbols that have occurred very rarely, in samples drawn independently from an unknown discrete distribution. Our first contribution is to show that the classical Good-Turing estimator used in this problem has performance guarantees that are asymptotically non-trivial only in a heavy-tail setting. This explains the success of the method in natural language modeling, where one often observes Zipf-law behavior. We then study the strong consistency of estimators, in the sense of ratios converging to one. We first show that the Good-Turing estimator is not universally consistent. We then use Karamata's theory of regular variation to prove that regularly varying heavy tails are sufficient for consistency. At the core of this result is a multiplicative concentration that we establish both by extending the McAllester-Ortiz additive concentration for the missing mass to all rare probabilities and by exploiting regular variation. We also derive a family of estimators which, in addition to being strongly consistent, address some of the shortcomings of the Good-Turing estimator. For example, they perform smoothing implicitly. This framework is a close parallel to extreme value theory, and many of the techniques therein can be adopted into the model set forth in this thesis. Lastly, we consider a different model that captures situations of data scarcity and large alphabets, which was recently suggested by Wagner, Viswanath, and Kulkarni. In their rare-events regime, one scales the finite support of the distribution with the number of samples, in a manner akin to high-dimensional statistics. In that context, we propose an approach that allows us to easily establish consistent estimators for a large class of canonical estimation problems, including estimating the entropy, the size of the alphabet, and the range of the probabilities.

    by Mesrob I. Ohannessian. Ph.D.
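
    The consistency claim is easy to probe empirically. The following simulation (the sample sizes, Zipf exponent, and truncation level are arbitrary choices made for this sketch) draws from a truncated power law and tracks the ratio of the Good-Turing missing-mass estimate to the true missing mass, which should approach one under such regularly varying tails:

        import numpy as np

        rng = np.random.default_rng(0)
        support = np.arange(1, 10**6 + 1)    # truncation level chosen arbitrarily
        p = support ** -1.5                  # Zipf-like heavy tail, p_j ~ j^(-1.5)
        p /= p.sum()

        for n in (10**3, 10**4, 10**5):
            sample = rng.choice(support, size=n, p=p)
            counts = np.bincount(sample, minlength=support.size + 1)[1:]
            true_m0 = p[counts == 0].sum()   # true missing mass
            gt_m0 = (counts == 1).sum() / n  # Good-Turing estimate N_1 / n
            print(n, gt_m0 / true_m0)        # ratio should drift toward 1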

    Simulation and visualization of fields and energy flows in electric circuits with idealized geometries

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2005. Includes bibliographical references (p. 63-64).

    This thesis develops a method to simulate and visualize the fields and energy flows in electric circuits, using a simplified physical model based on an idealized geometry. The physical models combine and extend previously proposed models to produce a rich array of interactive circuit configurations. For example, both driven and undriven series RLC circuits can be simulated. The computation underlying the simulations consists primarily of the numerical solution of several first-order differential equations and of a boundary value problem. The proposed visualization of these numerical results provides an appealing and physically meaningful representation of the fields and electromagnetic energy flows in these circuits.

    by Mesrob I. Ohannessian. S.M.
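
    As a flavor of the kind of computation involved, here is a minimal sketch of a driven series RLC circuit reduced to a first-order system and integrated numerically; the component values and the drive are placeholder choices, and this is not the thesis code:

        import numpy as np
        from scipy.integrate import solve_ivp

        L, R, C = 1.0, 0.5, 1.0              # placeholder component values
        drive = lambda t: np.cos(2.0 * t)    # sinusoidal source voltage

        def rlc(t, y):
            # Series RLC: L q'' + R q' + q / C = V(t), with state y = (q, i).
            q, i = y
            return [i, (drive(t) - R * i - q / C) / L]

        sol = solve_ivp(rlc, (0.0, 30.0), [0.0, 0.0], max_step=0.05)
        power_in = drive(sol.t) * sol.y[1]   # instantaneous source power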

    Concentration inequalities in the infinite urn scheme for occupancy counts and the missing mass, with applications

    An infinite urn scheme is defined by a probability mass function $(p_j)_{j\geq 1}$ over the positive integers. A random allocation consists of a sample of $N$ independent drawings according to this probability distribution, where $N$ may be deterministic or Poisson-distributed. This paper is concerned with occupancy counts, that is, with the number of symbols with $r$ or at least $r$ occurrences in the sample, and with the missing mass, that is, the total probability of all symbols that do not occur in the sample. Without any further assumption on the sampling distribution, these random quantities are shown to satisfy Bernstein-type concentration inequalities. The variance factors in these concentration inequalities are shown to be tight if the sampling distribution satisfies a regular variation property. This regular variation property reads as follows. Let the number of symbols with probability at least $x$ be $\vec{\nu}(x) = |\{j : p_j \geq x\}|$. In a regularly varying urn scheme, $\vec{\nu}$ satisfies $\lim_{\tau \to 0} \vec{\nu}(\tau x)/\vec{\nu}(\tau) = x^{-\alpha}$ for $\alpha \in [0,1]$, and the variance of the number of distinct symbols in a sample tends to infinity as the sample size tends to infinity. Among other applications, these concentration inequalities allow us to derive tight confidence intervals for the Good-Turing estimator of the missing mass.

    Comment: Published at http://dx.doi.org/10.3150/15-BEJ743 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
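
    A quick simulation makes the objects concrete. With the illustrative choice $p_j \propto j^{-2}$, the counting function $\vec{\nu}(x)$ grows like $x^{-1/2}$, i.e. the scheme is regularly varying with $\alpha = 1/2$. The snippet below (truncation level and sample size are choices made for this sketch) computes the occupancy counts and the missing mass of one allocation:

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(1)
        j = np.arange(1, 10**6 + 1)          # illustrative truncation level
        p = j ** -2.0                        # p_j ~ j^(-2): alpha = 1/2
        p /= p.sum()

        n = 10**4
        sample = rng.choice(j, size=n, p=p)
        counts = np.bincount(sample, minlength=j.size + 1)[1:]
        occupancy = Counter(counts[counts > 0])  # occupancy[r] = # symbols seen r times
        missing_mass = p[counts == 0].sum()      # mass of symbols absent from sample
        print(occupancy[1], occupancy[2], missing_mass)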